    Fatgraph expansion for noncritical superstrings

    We study the fatgraph expansion for the Complex Matrix Quantum Mechanics (CMQM) with a Chern-Simons coupling. In the double-scaling limit this model is believed to describe Type 0A superstrings in 1+1 dimensions in a Ramond-Ramond electric field. With Euclidean time compactified, we show that the RR electric field acts as a chemical potential for vortices living on the Feynman diagrams of the CMQM. We interpret this as evidence that the CMQM Feynman diagrams discretize the NSR formulation of the noncritical Type 0A superstring. We also study T-duality for the CMQM diagrams and propose that a certain complex matrix model is dual to the noncritical Type 0B superstring. Comment: 16 pages, 1 epsi figure
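    As a schematic illustration of the chemical-potential statement (our paraphrase, not the paper's formula): with Euclidean time compactified to circumference β, the fatgraph partition function should organize into sectors of fixed total vortex number q, with the RR electric field E entering as a fugacity for q. The normalization of the coupling below is an assumption.

    ```latex
    % Schematic grand-canonical structure (illustrative; the normalization is an
    % assumption, not the paper's exact expression): the RR electric field E weights
    % the vortex number q on the compactified fatgraphs like a chemical potential.
    Z(\beta, E) \;=\; \sum_{q \in \mathbb{Z}} e^{\beta E q}\, Z_q(\beta)
    ```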

    The difficulty of folding self-folding origami

    Why is it difficult to refold a previously folded sheet of paper? We show that even crease patterns with only one designed folding motion inevitably contain an exponential number of `distractor' folding branches accessible from a bifurcation at the flat state. Consequently, refolding a sheet requires finding the ground state in a glassy energy landscape with an exponential number of other attractors of higher energy, much like in models of protein folding (Levinthal's paradox) and other NP-hard satisfiability (SAT) problems. As in these problems, we find that refolding a sheet requires actuation at multiple carefully chosen creases. We show that seeding successful folding in this way can be understood in terms of sub-patterns that fold when cut out (`folding islands'). Besides providing guidelines for the placement of active hinges in origami applications, our results point to fundamental limits on the programmability of energy landscapes in sheets. Comment: 8 pages, 5 figures
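    As a cartoon of the counting argument (ours, not the paper's mechanical analysis): if each internal vertex of a crease pattern offers a couple of local folding branches at the flat-state bifurcation and only one global combination of choices is the designed motion, unguided folding succeeds exponentially rarely, while pinning ("actuating") a few well-chosen vertices restores a workable success rate. The vertex count and branch number below are made up.

    ```python
    import random

    # Toy counting model (illustrative only, not the paper's mechanical calculation):
    # each internal vertex picks one of b local folding branches at the flat state,
    # and only one global combination of choices is the designed folding motion.
    V = 12               # number of internal vertices (made up)
    b = 2                # local branches per vertex (made up)
    designed = [0] * V   # the designed branch choice at every vertex

    def random_fold(pinned=()):
        """Choose a branch at every vertex; actuated vertices keep the designed choice."""
        return [designed[i] if i in pinned else random.randrange(b) for i in range(V)]

    def success_rate(pinned=(), trials=100_000):
        return sum(random_fold(pinned) == designed for _ in range(trials)) / trials

    print(f"branches at the flat state: {b**V}")
    print(f"unguided success rate    ~ {success_rate():.1e} (expected {b**-V:.1e})")
    actuated = tuple(range(8))   # pin 8 chosen vertices to their designed branch
    print(f"with 8 actuated vertices ~ {success_rate(actuated):.1e} (expected {b**-(V - 8):.1e})")
    ```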

    Learned multi-stability in mechanical networks

    We contrast the distinct frameworks of materials design and physical learning in creating elastic networks with desired stable states. In design, the desired states are specified in advance and material parameters can be optimized on a computer with this knowledge. In learning, the material physically experiences the desired stable states in sequence, changing the material so as to stabilize each additional state. We show that while designed states are stable in networks of linear Hookean springs, sequential learning requires specific non-linear elasticity. We find that such non-linearity stabilizes states in which strain is zero in some springs and large in others, thus playing the role of Bayesian priors used in sparse statistical regression. Our model shows how specific material properties allow continuous learning of new functions through deployment of the material itself.
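    A one-dimensional caricature of that last point (ours, with arbitrary parameters, not the networks studied in the paper): a node between two walls, tied to each by a spring whose rest length is incompatible with the wall spacing. With Hookean springs the total energy is quadratic and has a single minimum; with a saturating spring energy, the two states "spring 1 relaxed, spring 2 strained" and the reverse both become stable minima.

    ```python
    import numpy as np

    # Toy 1D illustration (arbitrary parameters, not the paper's network model):
    # a node at position x between walls at 0 and 1, tied to each wall by a spring
    # of rest length 0.3, so both springs cannot be relaxed at the same time.
    rest = 0.3

    def hookean(e):
        return 0.5 * e**2                # linear (Hookean) spring energy

    def saturating(e, a=25.0):
        return e**2 / (1.0 + a * e**2)   # spring energy saturates at large strain

    def count_minima(spring):
        x = np.linspace(0.05, 0.95, 2001)
        E = spring(x - rest) + spring((1.0 - x) - rest)
        interior = (E[1:-1] < E[:-2]) & (E[1:-1] < E[2:])
        return int(interior.sum())

    print("local minima, Hookean springs:   ", count_minima(hookean))      # -> 1
    print("local minima, saturating springs:", count_minima(saturating))   # -> 2
    ```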

    Learning without neurons in physical systems

    Learning is traditionally studied in biological or computational systems. The power of learning frameworks in solving hard inverse problems provides an appealing case for the development of `physical learning', in which physical systems adopt desirable properties on their own without computational design. It was recently realized that large classes of physical systems can physically learn through local learning rules, autonomously adapting their parameters in response to observed examples of use. We review recent work in the emerging field of physical learning, describing theoretical and experimental advances in areas ranging from molecular self-assembly to flow networks and mechanical materials. Physical learning machines provide multiple practical advantages over computer-designed ones, in particular by not requiring an accurate model of the system and by autonomously adapting to changing needs over time. As theoretical constructs, physical learning machines afford a novel perspective on how physical constraints modify abstract learning theory. Comment: 25 pages, 6 figures
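    A minimal sketch of the kind of local rule the review surveys (a contrastive, coupled-learning-style update on a two-resistor voltage divider; the toy and its conventions are ours, not a specific published algorithm): each edge adjusts its conductance using only the voltage drop it sees in a free state and in a state where the output is gently clamped toward the target.

    ```python
    # Minimal sketch of a local, contrastive learning rule on a two-resistor divider.
    # Illustrative toy in the spirit of coupled-learning schemes; prefactors and sign
    # conventions are chosen for simplicity and vary between published rules.
    k1, k2 = 1.0, 1.0              # conductances: source-output and output-ground edges
    V_source, V_target = 1.0, 0.8  # input voltage and desired output voltage
    eta, alpha = 0.05, 0.2         # nudge amplitude and learning rate

    for step in range(200):
        v_free = V_source * k1 / (k1 + k2)            # free state: physics sets the output
        v_clamp = v_free + eta * (V_target - v_free)  # clamped state: nudge toward target
        # Each edge updates its conductance from its own voltage drop in the two states.
        dk1 = (alpha / eta) * ((V_source - v_free) ** 2 - (V_source - v_clamp) ** 2)
        dk2 = (alpha / eta) * (v_free ** 2 - v_clamp ** 2)
        k1, k2 = max(k1 + dk1, 1e-6), max(k2 + dk2, 1e-6)

    print(f"learned output voltage: {V_source * k1 / (k1 + k2):.3f} (target {V_target})")
    ```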